
    HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation

    Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be flexibly deployable for a variety of computing tasks. There is a growing interest among cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. In addition, we discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources. (15 pages, 9 figures.)

    CMS workflow execution using intelligent job scheduling and data access strategies

    Complex scientific workflows can process large amounts of data using thousands of tasks. The turnaround times of these workflows are often affected by various latencies, such as the resource discovery, scheduling, and data access latencies for the individual workflow processes or actors. Minimizing these latencies improves the overall execution time of a workflow and thus leads to a more efficient and robust processing environment. In this paper, we propose a pilot job concept with intelligent data reuse and job execution strategies to minimize the scheduling, queuing, execution, and data access latencies. The results show that significant improvements in the overall turnaround time of a workflow can be achieved with this approach. The proposed approach was evaluated first using the CMS Tier0 data processing workflow, and then by simulating the workflows to evaluate its effectiveness in a controlled environment. © 2011 IEEE
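    As a rough illustration of the pilot-job idea described in this abstract, the sketch below pays the scheduling and startup latency once per pilot and then runs several payload jobs in the same slot, caching each dataset locally so repeated accesses skip the transfer latency. The class, latency figures, cache path, and job list are invented for illustration and are not the paper's actual implementation.

    ```python
    class Pilot:
        """Toy pilot job: many payload jobs share one acquired slot and
        one local data cache (illustrative sketch, not the CMS system)."""

        def __init__(self, fetch_latency=5.0, exec_time=1.0):
            self.cache = {}                    # dataset name -> local path
            self.fetch_latency = fetch_latency # cost of one remote transfer
            self.exec_time = exec_time         # cost of one payload job

        def stage(self, dataset):
            # Data-reuse strategy: fetch each dataset at most once per pilot.
            if dataset not in self.cache:
                self.cache[dataset] = f"/local/scratch/{dataset}"
                return self.fetch_latency      # cache miss: pay the transfer
            return 0.0                         # cache hit: no access latency

        def run(self, jobs):
            # jobs is a list of (name, dataset); returns simulated wall time.
            total = 0.0
            for name, dataset in jobs:
                total += self.stage(dataset) + self.exec_time
            return total

    jobs = [("reco-1", "run2011A"), ("reco-2", "run2011A"),
            ("reco-3", "run2011B")]
    pilot = Pilot()
    total = pilot.run(jobs)
    print(total)   # (5 + 1) + (0 + 1) + (5 + 1) = 13.0
    ```

    Without data reuse, every job would pay the transfer latency (3 × 6 = 18.0 here); the cache hit on the second job is where the turnaround improvement comes from.
    
    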

    The role of caretakers in disease dynamics

    One of the key challenges in modeling the dynamics of contagion phenomena is to understand how the structure of social interactions shapes the time course of a disease. Complex network theory has provided significant advances in this context. However, awareness of an epidemic in a population typically yields behavioral changes that correspond to changes in the network structure on which the disease evolves. This feedback mechanism has not been investigated in depth. For example, one would intuitively expect susceptible individuals to avoid infected ones. However, doctors treating patients or parents tending sick children may instead increase the amount of contact they make with infected individuals, in an effort to speed up recovery but also exposing themselves to higher risks of infection. We study the role of these caretaker links in an adaptive network model where individuals react to a disease by increasing or decreasing the amount of contact they make with infected individuals. We find that pure avoidance, with only a few caretaker links, is the best strategy for curtailing an SIS disease in networks that possess a large topological variability. In more homogeneous networks, disease prevalence is decreased for low concentrations of caretakers, whereas a high prevalence emerges if the caretaker concentration passes a well-defined critical value. (8 pages, 9 figures.)
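    The avoidance-versus-caretaker mechanism can be illustrated with a toy SIS update on a small network. This is a minimal sketch under simplified assumptions: links between a susceptible and an infected node are cut with probability `p_avoid` (avoidance), and the links that remain act as "caretaker" contacts along which infection can pass. The rewiring rule, parameter names, and network are invented for illustration and are not the authors' exact model.

    ```python
    import random

    def sis_adaptive_step(edges, infected, beta, gamma, p_avoid, rng):
        """One update of an SIS epidemic with adaptive link removal
        (illustrative sketch of avoidance vs. caretaker links)."""
        kept, newly_infected = set(), set()
        for u, v in edges:
            si_link = (u in infected) != (v in infected)
            if si_link and rng.random() < p_avoid:
                continue                       # avoidance: susceptible cuts the link
            kept.add((u, v))                   # kept SI links act as caretaker contacts
            if si_link and rng.random() < beta:
                newly_infected |= {u, v} - infected   # infect the susceptible endpoint
        recovered = {n for n in infected if rng.random() < gamma}
        return kept, (infected | newly_infected) - recovered

    rng = random.Random(0)
    edges = {(i, (i + 1) % 20) for i in range(20)}   # ring of 20 nodes
    infected = {0}
    for _ in range(50):
        edges, infected = sis_adaptive_step(edges, infected,
                                            beta=0.3, gamma=0.1,
                                            p_avoid=0.8, rng=rng)
    print(len(infected), len(edges))
    ```

    With a high `p_avoid` (mostly avoidance, few caretaker links), SI links are usually removed before transmission can occur, which is the regime the abstract identifies as best for heterogeneous networks.
    
    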

    Practice and consensus-based strategies in diagnosing and managing systemic juvenile idiopathic arthritis in Germany

    Background: Systemic juvenile idiopathic arthritis (SJIA) is an autoinflammatory disease associated with chronic arthritis. Early diagnosis and effective therapy of SJIA are desirable, so that complications are avoided. The PRO-KIND initiative of the German Society for Pediatric Rheumatology (GKJR) aims to define consensus-based strategies to harmonize diagnostic and therapeutic approaches in Germany. Methods: We analyzed data on patients diagnosed with SJIA from 3 national registries in Germany. Subsequently, via online surveys and teleconferences among pediatric rheumatologists with special expertise in the treatment of SJIA, we identified current diagnostic and treatment approaches in Germany. These were harmonized via the formulation of statements, supported by findings from a literature search. Finally, an in-person consensus conference using nominal group technique was held to further modify and approve the statements. Results: Up to 50% of patients diagnosed with SJIA in Germany do not fulfill the International League of Associations for Rheumatology (ILAR) classification criteria, mostly due to the absence of chronic arthritis. Our findings suggest that chronic arthritis is not obligatory for the diagnosis and treatment of SJIA, allowing a diagnosis of probable SJIA. Malignant, infectious, and hereditary autoinflammatory diseases should be considered before rendering a diagnosis of probable SJIA. There is substantial variability in the initial treatment of SJIA. Based on registry data, most patients initially receive systemic glucocorticoids, although these are increasingly substituted or accompanied by biological agents, i.e. interleukin (IL)-1 and IL-6 blockade (up to 27.2% of patients).
    We identified preferred initial therapies for probable and definitive SJIA, including step-up patterns and treatment targets for the short term (resolution of fever, decrease in C-reactive protein by 50% within 7 days), the mid term (improvement in physician global assessment and active joint count by at least 50%, or a JADAS-10 score of at most 5.4, within 4 weeks), and the long term (glucocorticoid-free clinically inactive disease within 6 to 12 months), together with an explicit treat-to-target strategy. Conclusions: We developed consensus-based strategies regarding the diagnosis and treatment of probable or definitive SJIA in Germany.
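    The short- and mid-term treatment targets quoted in this abstract are explicit decision rules, which can be expressed directly in code. The sketch below is a simplified illustration of those thresholds only (function names and the input encoding are invented); it is not a clinical tool and not part of the PRO-KIND consensus itself.

    ```python
    def short_term_target_met(fever_resolved, crp_baseline, crp_day7):
        """Short-term target (within 7 days): fever resolved and a >= 50%
        decrease in C-reactive protein. Illustrative only, not clinical advice."""
        return fever_resolved and crp_day7 <= 0.5 * crp_baseline

    def mid_term_target_met(jadas10=None, physician_global_improvement=None,
                            joint_count_improvement=None):
        """Mid-term target (within 4 weeks): JADAS-10 of at most 5.4, or
        >= 50% improvement in both physician global assessment and active
        joint count. Improvements are fractions (0.5 == 50%)."""
        if jadas10 is not None and jadas10 <= 5.4:
            return True
        return (physician_global_improvement is not None
                and joint_count_improvement is not None
                and physician_global_improvement >= 0.5
                and joint_count_improvement >= 0.5)

    # Example: fever gone and CRP fell from 120 to 45 mg/L (a > 50% drop).
    print(short_term_target_met(True, crp_baseline=120.0, crp_day7=45.0))  # True
    ```

    Encoding the targets this way makes the treat-to-target logic explicit: each review visit evaluates the rule for its time horizon and triggers a step-up in therapy when the target is missed.
    
    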

    Jets and energy flow in photon-proton collisions at HERA

    Properties of the hadronic final state in photoproduction events with large transverse energy are studied at the electron-proton collider HERA. Distributions of the transverse energy, jets, and underlying event energy are compared to p̄p data and QCD calculations. The comparisons show that the γp events can be consistently described by QCD models including, in addition to the primary hard scattering process, interactions between the two beam remnants. The differential jet cross sections dσ/dE_T^jet and dσ/dη^jet are measured.

    Development of an interactive modeling system for low-temperature gas separation technology

    We present a study of J/ψ meson production in collisions of 26.7 GeV electrons with 820 GeV protons, performed with the H1 detector at the HERA collider at DESY. The J/ψ mesons are detected via their leptonic decays both to electrons and muons. Requiring exactly two particles in the detector, a cross section of σ(ep → J/ψ X) = (8.8 ± 2.0 ± 2.2) nb is determined for 30 GeV ≤ W_γp ≤ 180 GeV and Q² ≲ 4 GeV². Using the flux of quasi-real photons with Q² ≲ 4 GeV², a total production cross section of σ(γp → J/ψ X) = (56 ± 13 ± 14) nb is derived at an average W_γp = 90 GeV. The distribution of the squared momentum transfer t from the proton to the J/ψ can be fitted using an exponential exp(−b|t|) below a |t| of 0.75 GeV², yielding a slope parameter of b = (4.7 ± 1.9) GeV⁻².
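    The exponential fit quoted above, dN/d|t| ∝ exp(−b|t|), can be illustrated with a toy example: draw |t| values from a truncated exponential with a known slope and recover b from a straight-line fit to the log of the binned counts. The sample, binning, and fit below are invented for illustration; they use the quoted b = 4.7 GeV⁻² and |t| < 0.75 GeV² as inputs but are not the H1 analysis.

    ```python
    import math
    import random

    b_true = 4.7          # GeV^-2, slope used to generate the toy sample
    t_max = 0.75          # GeV^2, upper edge of the fit range
    rng = random.Random(1)

    # Draw toy |t| values from exp(-b|t|), truncated at t_max (inversion method).
    norm = 1 - math.exp(-b_true * t_max)
    t = [-math.log(1 - rng.random() * norm) / b_true for _ in range(20000)]

    # Bin the sample; a straight-line fit to log(counts) vs. |t| has slope -b.
    nbins = 15
    width = t_max / nbins
    counts = [0] * nbins
    for x in t:
        counts[min(int(x / width), nbins - 1)] += 1
    xs = [(i + 0.5) * width for i in range(nbins)]   # bin centers
    ys = [math.log(c) for c in counts]

    # Ordinary least-squares slope of ys vs. xs.
    n = nbins
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b_est = -slope
    print(round(b_est, 1))   # close to 4.7 up to statistical fluctuation
    ```

    Taking the logarithm turns the exponential into a straight line, so the slope parameter b falls out of a simple linear fit; a real analysis would also propagate the statistical and systematic uncertainties that give the quoted ±1.9 GeV⁻².
    
    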

    Saving Human Lives: What Complexity Science and Information Systems can Contribute

    We discuss models and data of crowd disasters, crime, terrorism, war, and disease spreading to show that conventional recipes, such as deterrence strategies, are often not effective and sufficient to contain them. Many common approaches do not provide a good picture of the actual system behavior, because they neglect feedback loops, instabilities, and cascade effects. The complex and often counter-intuitive behavior of social systems and their macro-level collective dynamics can be better understood by means of complexity science. We highlight that a suitable system design and management can help to stop undesirable cascade effects and to enable favorable kinds of self-organization in the system. In such a way, complexity science can help to save human lives. (67 pages, 25 figures; accepted for publication in Journal of Statistical Physics. For related work see http://www.futurict.eu/.)

    American College of Rheumatology Provisional Criteria for Clinically Relevant Improvement in Children and Adolescents With Childhood-Onset Systemic Lupus Erythematosus

    DOI: 10.1002/acr.23834. Arthritis Care & Research, 71(5), 579-59

    Enabling opportunistic resources for CMS Computing Operations

    With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize opportunistic resources (resources not owned by, or a priori configured for, CMS) to meet peak demands. In addition to our dedicated resources, we look to add computing resources from non-CMS grids, cloud resources, and national supercomputing centers. CMS uses the HTCondor/glideinWMS job submission infrastructure for all its batch processing, so such resources will need to be transparently integrated into its glideinWMS pool. Bosco and Parrot wrappers are used to enable access and bring the CMS environment into these non-CMS resources. Here we describe our strategy to supplement our native capabilities with opportunistic resources and our experience so far using them.

    The CMS Tier0 goes Cloud and Grid for LHC Run 2

    In 2015, CMS will embark on a new era of collecting LHC collisions at unprecedented rates and complexity. This will put a tremendous stress on our computing systems. Prompt Processing of the raw data by the Tier-0 infrastructure will no longer be constrained to CERN alone due to the significantly increased resource requirements. In LHC Run 2, we will need to operate it as a distributed system utilizing both the CERN Cloud-based Agile Infrastructure and a significant fraction of the CMS Tier-1 Grid resources. In another big change for LHC Run 2, we will process all data using the multi-threaded framework to deal with the increased event complexity and to ensure efficient use of the resources. This contribution will cover the evolution of the Tier-0 infrastructure and present scale testing results and experiences from the first data taking in 2015